33 research outputs found

    Cascading Failures in Power Grids - Analysis and Algorithms

    Full text link
    This paper focuses on cascading line failures in the transmission system of the power grid. Recent large-scale power outages demonstrated the limitations of percolation- and epidemic-based tools in modeling cascades. Hence, we study cascades by using computational tools and a linearized power flow model. We first obtain results regarding the Moore-Penrose pseudo-inverse of the power grid admittance matrix. Based on these results, we study the impact of a single line failure on the flows on other lines. We also illustrate via simulation the impact of the distance and resistance distance on the flow increase following a failure, and discuss the difference from the epidemic models. We then study the cascade properties, considering metrics such as the distance between failures and the fraction of demand (load) satisfied after the cascade (yield). We use the pseudo-inverse of the admittance matrix to develop an efficient algorithm to identify the cascading failure evolution, which can be a building block for cascade mitigation. Finally, we show that finding the set of lines whose removal has the most significant impact (under various metrics) is NP-hard and introduce a simple heuristic for the minimum yield problem. Overall, the results demonstrate that using the resistance distance and the pseudo-inverse of the admittance matrix provides important insights and can support the development of efficient algorithms.
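
    As an illustration of the analysis above, the following Python sketch (an assumption for illustration, not the paper's code) builds the admittance matrix of a small made-up 4-bus grid as a weighted Laplacian, computes its Moore-Penrose pseudo-inverse, derives the DC power flows and resistance distances from it, and then uses the standard line-outage distribution factor, expressed through the pseudo-inverse, to obtain the flows on the surviving lines after a single line failure.

        # Hedged sketch (not the paper's code): effect of a single line failure
        # under the linearized (DC) power flow model, computed via the
        # Moore-Penrose pseudo-inverse of the admittance (weighted Laplacian)
        # matrix.  The grid topology, susceptances and injections are made up.
        import numpy as np

        # Example 4-bus grid: (from_bus, to_bus, susceptance).
        lines = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 1.0), (0, 2, 2.0)]
        n = 4
        p = np.array([1.0, 0.5, -0.5, -1.0])      # bus injections, sum to zero

        def laplacian(lines, n):
            B = np.zeros((n, n))
            for i, j, b in lines:
                B[i, i] += b; B[j, j] += b
                B[i, j] -= b; B[j, i] -= b
            return B

        Bplus = np.linalg.pinv(laplacian(lines, n))   # Moore-Penrose pseudo-inverse
        theta = Bplus @ p                             # phase angles
        flows = np.array([b * (theta[i] - theta[j]) for i, j, b in lines])

        def resistance_distance(i, j):
            # Resistance distance between buses i and j from the pseudo-inverse.
            return Bplus[i, i] + Bplus[j, j] - 2 * Bplus[i, j]

        def flows_after_failure(e):
            # Flows on the surviving lines after line e fails, via the standard
            # line-outage distribution factor written in terms of Bplus.
            i, j, _ = lines[e]
            def ptdf(l):                 # sensitivity of the flow on line l to a
                u, v, b = lines[l]       # unit injection at i withdrawn at j
                return b * (Bplus[u, i] - Bplus[u, j] - Bplus[v, i] + Bplus[v, j])
            denom = 1.0 - ptdf(e)        # zero iff removing e islands the grid
            return np.array([flows[l] + ptdf(l) / denom * flows[e]
                             for l in range(len(lines)) if l != e])

        print("resistance distance (0,1):", round(resistance_distance(0, 1), 3))
        print("pre-failure flows        :", np.round(flows, 3))
        print("flows after line 0 fails :", np.round(flows_after_failure(0), 3))

    Recomputing the flows from scratch on the reduced grid gives the same answer; the point of the pseudo-inverse formulation is that it avoids refactorizing the admittance matrix for every candidate failure, which is presumably what enables the efficient cascade-evolution algorithm mentioned in the abstract.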

    LINGUIST: Language Model Instruction Tuning to Generate Annotated Utterances for Intent Classification and Slot Tagging

    Full text link
    We present LINGUIST, a method for generating annotated data for Intent Classification and Slot Tagging (IC+ST) via fine-tuning AlexaTM 5B, a 5-billion-parameter multilingual sequence-to-sequence (seq2seq) model, on a flexible instruction prompt. In a 10-shot novel intent setting for the SNIPS dataset, LINGUIST surpasses state-of-the-art approaches (Back-Translation and Example Extrapolation) by a wide margin, showing absolute improvement for the target intents of +1.9 points on IC Recall and +2.5 points on ST F1 Score. In the zero-shot cross-lingual setting of the mATIS++ dataset, LINGUIST outperforms a strong baseline of Machine Translation with Slot Alignment by +4.14 points absolute on ST F1 Score across 6 languages, while matching performance on IC. Finally, we verify our results on an internal large-scale multilingual dataset for conversational agent IC+ST and show significant improvements over a baseline which uses Back-Translation, Paraphrasing, and Slot Catalog Resampling. To our knowledge, we are the first to demonstrate instruction fine-tuning of a large-scale seq2seq model to control the outputs of multilingual intent- and slot-labeled data generation.
    Comment: Accepted to The 29th International Conference on Computational Linguistics (COLING 2022), October 12-17, 2022, Gyeongju, Republic of Korea. https://coling2022.org
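
    As a rough illustration of the setup described above (an assumption, not the paper's actual prompt format; AlexaTM 5B is not publicly released, so a public mT5 checkpoint stands in), the sketch below feeds an instruction prompt naming an intent, its slots, and one bracket-annotated example to a multilingual seq2seq model and samples new annotated utterances. In the paper the model is first instruction fine-tuned for this task; without that step the stand-in model will not produce useful output, so the snippet only shows the interface.

        # Hedged sketch: instruction-prompting a multilingual seq2seq model to
        # generate utterances annotated for intent classification and slot
        # tagging.  Model choice, prompt wording and bracket scheme are assumptions.
        from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

        model_name = "google/mt5-base"            # public stand-in for AlexaTM 5B
        tokenizer = AutoTokenizer.from_pretrained(model_name)
        model = AutoModelForSeq2SeqLM.from_pretrained(model_name)

        instruction = (
            "Generate 3 new utterances for intent BookRestaurant. "
            "Mark slot values as [slot_name value]. Slots: cuisine, party_size, city. "
            "Example: book a table for [party_size two] at an [cuisine italian] "
            "place in [city boston]"
        )

        inputs = tokenizer(instruction, return_tensors="pt")
        outputs = model.generate(**inputs, max_new_tokens=64,
                                 do_sample=True, num_return_sequences=3)
        for ids in outputs:
            print(tokenizer.decode(ids, skip_special_tokens=True))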

    AlexaTM 20B: Few-Shot Learning Using a Large-Scale Multilingual Seq2Seq Model

    Full text link
    In this work, we demonstrate that multilingual large-scale sequence-to-sequence (seq2seq) models, pre-trained on a mixture of denoising and Causal Language Modeling (CLM) tasks, are more efficient few-shot learners than decoder-only models on various tasks. In particular, we train a 20-billion-parameter multilingual seq2seq model called Alexa Teacher Model (AlexaTM 20B) and show that it achieves state-of-the-art (SOTA) performance on 1-shot summarization tasks, outperforming a much larger 540B PaLM decoder model. AlexaTM 20B also achieves SOTA in 1-shot machine translation, especially for low-resource languages, across almost all language pairs supported by the model (Arabic, English, French, German, Hindi, Italian, Japanese, Marathi, Portuguese, Spanish, Tamil, and Telugu) on the Flores-101 dataset. We also show that, in the zero-shot setting, AlexaTM 20B outperforms GPT-3 (175B) on the SuperGLUE and SQuADv2 datasets and provides SOTA performance on multilingual tasks such as XNLI, XCOPA, Paws-X, and XWinograd. Overall, our results present a compelling case for seq2seq models as a powerful alternative to decoder-only models for Large-scale Language Model (LLM) training.
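
    A minimal sketch of the two pretraining objectives named above, denoising and Causal Language Modeling, expressed as seq2seq input/target text pairs; the sentinel-token format and span selection below are assumptions for illustration, not AlexaTM's exact recipe.

        # Hedged sketch: the two pretraining objectives as input/target pairs
        # for a seq2seq model.  The formats shown are illustrative assumptions.
        import random

        text = "the quick brown fox jumps over the lazy dog".split()

        def denoising_example(tokens, span_len=2):
            # Mask a contiguous span with a sentinel; the target restores it.
            start = random.randrange(len(tokens) - span_len)
            source = tokens[:start] + ["<extra_id_0>"] + tokens[start + span_len:]
            target = ["<extra_id_0>"] + tokens[start:start + span_len]
            return " ".join(source), " ".join(target)

        def clm_example(tokens, prefix_len=4):
            # Causal LM cast as seq2seq: condition on a prefix, predict the rest.
            return " ".join(tokens[:prefix_len]), " ".join(tokens[prefix_len:])

        for make in (denoising_example, clm_example):
            src, tgt = make(text)
            print(f"{make.__name__}:\n  input : {src}\n  target: {tgt}")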

    Line Failure Detection After a Cyber-Physical Attack on the Grid Using Bayesian Regression

    No full text